7 research outputs found

    On extending the ADMM algorithm to the quaternion algebra setting

    Many image and signal processing problems benefit from quaternion-based models, due to their ability to process different features simultaneously. Recently, the quaternion algebra model has been combined with dictionary learning and sparse representation models, which leads to versatile optimization problems formulated over the quaternion algebra. Since the quaternions form a noncommutative algebra, calculating the gradient of a quaternion objective function is usually fairly complex. This paper aims to present a generalization of the alternating direction method of multipliers (ADMM) over the quaternion algebra, employing results from the recently introduced generalized HR (GHR) calculus. Furthermore, we consider convex optimization problems of real functions of a quaternion variable.
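
    The key obstacle this abstract points to is noncommutativity. A minimal Python sketch (not the paper's implementation; the helper qmul is hypothetical) of the Hamilton product illustrates why: p*q and q*p generally differ, which is what complicates quaternion gradients and motivates the GHR calculus.

        import numpy as np

        def qmul(p, q):
            """Hamilton product of quaternions given as arrays [w, x, y, z]."""
            pw, px, py, pz = p
            qw, qx, qy, qz = q
            return np.array([
                pw*qw - px*qx - py*qy - pz*qz,
                pw*qx + px*qw + py*qz - pz*qy,
                pw*qy - px*qz + py*qw + pz*qx,
                pw*qz + px*qy - py*qx + pz*qw,
            ])

        p = np.array([0.0, 1.0, 0.0, 0.0])   # the unit i
        q = np.array([0.0, 0.0, 1.0, 0.0])   # the unit j
        print(qmul(p, q))   # i*j =  k -> [0, 0, 0,  1]
        print(qmul(q, p))   # j*i = -k -> [0, 0, 0, -1]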

    Octonion sparse representation for color and multispectral image processing

    A recent trend in color image processing combines the quaternion algebra with dictionary learning methods. This paper aims to present a generalization of the quaternion dictionary learning method using the octonion algebra. The octonion algebra combined with dictionary learning methods is well suited for the representation of multispectral images with up to 7 color channels. As opposed to classical dictionary learning techniques, which treat multispectral images by concatenating the spectral bands into a large monochrome image, we treat all spectral bands simultaneously. Our approach leads to better preservation of color fidelity in true- and false-color images of the reconstructed multispectral image. To show the potential of the octonion-based model, experiments are conducted on image reconstruction and denoising of color images as well as of the extensively used Landsat 7 images.
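
    As a rough illustration of the data model (an assumption on my part, not the authors' code), each multispectral pixel with up to 7 bands can be stored as a pure octonion: an 8-vector with zero real part whose imaginary components carry the spectral bands. A dictionary learning method would then operate on these octonion-valued signals directly instead of a concatenated monochrome image. The helper pixels_to_octonions is hypothetical.

        import numpy as np

        def pixels_to_octonions(image):
            """image: (H, W, C) array with C <= 7 spectral bands -> (H, W, 8) octonion array."""
            h, w, c = image.shape
            assert c <= 7, "the octonion model accommodates at most 7 channels"
            oct_img = np.zeros((h, w, 8), dtype=image.dtype)
            oct_img[..., 1:1 + c] = image   # bands occupy the imaginary units e1..e7
            return oct_img

        landsat_like = np.random.rand(4, 4, 7)           # hypothetical 7-band patch
        print(pixels_to_octonions(landsat_like).shape)   # (4, 4, 8)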

    Hypercomplex algebras for dictionary learning

    This paper presents an application of hypercomplex algebras, combined with dictionary learning, to the sparse representation of multichannel images. Two main representatives of hypercomplex algebras are considered: Clifford algebras and algebras generated by the Cayley-Dickson procedure. Related works reported quaternion methods (for color images) and octonion methods, which are applicable to images with up to 7 channels. We show that the current constructions cannot be generalized to dimensions above eight.
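
    The Cayley-Dickson procedure mentioned above can be sketched in a few lines. The Python snippet below is an illustration under one common sign convention, (a, b)(c, d) = (ac - conj(d)b, da + b conj(c)), and is not code from the paper; it doubles reals into complex numbers, quaternions and octonions, and shows the nonassociativity that appears at dimension eight.

        import numpy as np

        def cd_conj(x):
            """Cayley-Dickson conjugation: conj((a, b)) = (conj(a), -b)."""
            if len(x) == 1:
                return x.copy()
            h = len(x) // 2
            return np.concatenate([cd_conj(x[:h]), -x[h:]])

        def cd_mul(x, y):
            """Recursive Cayley-Dickson product of two arrays of power-of-two length."""
            if len(x) == 1:
                return x * y
            h = len(x) // 2
            a, b = x[:h], x[h:]
            c, d = y[:h], y[h:]
            return np.concatenate([cd_mul(a, c) - cd_mul(cd_conj(d), b),
                                   cd_mul(d, a) + cd_mul(b, cd_conj(c))])

        e1, e2, e4 = np.eye(8)[1], np.eye(8)[2], np.eye(8)[4]   # octonion basis units
        # Octonions are nonassociative: under this convention (e1*e2)*e4 = e7
        # while e1*(e2*e4) = -e7.
        print(cd_mul(cd_mul(e1, e2), e4))
        print(cd_mul(e1, cd_mul(e2, e4)))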

    Acoustic seafloor classification using the Weyl transform of multibeam echosounder backscatter mosaic

    The use of multibeam echosounder systems (MBES) for detailed seafloor mapping is increasing at a fast pace. Due to their design, enabling continuous high-density measurements and the co-registration of seafloor depth and reflectivity, MBES have become a fundamental instrument in the advancing field of acoustic seafloor classification (ASC). With these data becoming available, recent seafloor mapping research focuses on the interpretation of the hydroacoustic data and on automated predictive modeling of seafloor composition. While there is no methodological consensus in the scientific community on a seafloor sediment classification algorithm and routine, it is expected that progress will occur through the refinement of each stage of the ASC pipeline, ranging from data acquisition to the modeling phase. This research focuses on the feature extraction stage, wherein the spatial variables used for classification are, in this case, derived from the MBES backscatter data. This contribution explores the sediment classification potential of a textural feature based on the recently introduced Weyl transform, applied to 300 kHz MBES backscatter imagery acquired over a nearshore study site in Belgian Waters. The suitability of the Weyl transform textural feature for seafloor sediment classification was assessed in terms of the cluster separation of Folk’s sedimentological categories (4-class scheme). Class separation potential was quantified at multiple spatial scales by cluster silhouette coefficients. Weyl features derived from MBES backscatter data were found to exhibit superior thematic class separation compared to other well-established textural features, namely (1) first-order statistics, (2) gray-level co-occurrence matrices (GLCM), (3) the wavelet transform and (4) local binary patterns (LBP). Finally, by employing a Random Forest (RF) classifier, the value of the proposed textural feature for seafloor sediment mapping was confirmed in terms of global and per-class classification accuracies, which were highest for models based on the backscatter Weyl features. Further tests on different backscatter datasets and sediment classification schemes are required to elucidate the use of the Weyl transform of MBES backscatter imagery in the context of seafloor mapping.
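
    The evaluation pipeline described above can be sketched with off-the-shelf tools; the snippet below is a hedged illustration, not the authors' code. extract_weyl_features is a placeholder (the actual Weyl transform of the backscatter mosaic is not implemented here), and the patches and labels are synthetic; the sketch only shows how silhouette coefficients score class separation and how a Random Forest is trained on per-patch features against a 4-class labelling.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import silhouette_score
        from sklearn.model_selection import train_test_split

        def extract_weyl_features(patch):
            """Placeholder for a Weyl-transform texture descriptor of one backscatter patch."""
            return np.array([patch.mean(), patch.std(), np.abs(np.fft.fft2(patch)).mean()])

        rng = np.random.default_rng(0)
        patches = rng.random((200, 32, 32))      # hypothetical backscatter patches
        labels = rng.integers(0, 4, size=200)    # hypothetical 4-class sediment labels
        X = np.stack([extract_weyl_features(p) for p in patches])

        # Cluster separation of the labelled classes in feature space (higher is better).
        print("silhouette:", silhouette_score(X, labels))

        # Supervised sediment classification with a Random Forest.
        X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        print("accuracy:", clf.score(X_te, y_te))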

    The Euler-Bernoulli equation with distributional coefficients and forces

    In this work we investigate a very weak solution to the initial-boundary value problem for an Euler-Bernoulli beam model. We allow the bending stiffness, the axial and transversal forces, as well as the initial conditions, to be irregular functions or distributions. We prove the well-posedness of this problem in the very weak sense. More precisely, we define the very weak solution to the problem and show its existence and uniqueness. For regular enough coefficients we show consistency with the weak solution. Numerical analysis shows that the very weak solution coincides with the weak solution when the latter exists, and it also offers further insight into the behaviour of the very weak solution when the weak solution does not exist.
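
    For orientation, one common form of an Euler-Bernoulli initial-boundary value problem with the ingredients named above (the notation here is assumed, not taken from the paper) is, in LaTeX,

        \[
          \partial_t^2 u(t,x) + \partial_x^2\bigl(a(x)\,\partial_x^2 u(t,x)\bigr)
          - \partial_x\bigl(b(x)\,\partial_x u(t,x)\bigr) = f(t,x),
          \qquad (t,x)\in(0,T)\times(0,L),
        \]
        \[
          u(0,x)=u_0(x), \qquad \partial_t u(0,x)=u_1(x),
        \]

    supplemented with boundary conditions at x = 0 and x = L, where u is the transversal displacement, a the bending stiffness, b the axial force and f the transversal force; in the very weak setting a, b, f, u_0 and u_1 may be irregular functions or distributions.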

    On interpretability of CNNs for multimodal medical image segmentation

    Despite their huge potential, deep learning-based models are still not trustworthy enough to warrant adoption in clinical practice. Research on the interpretability and explainability of deep learning is currently attracting considerable attention. The Multilayer Convolutional Sparse Coding (ML-CSC) data model provides a model-based explanation of convolutional neural networks (CNNs). In this article, we extend the ML-CSC framework towards multimodal data for medical image segmentation and propose a merged joint feature extraction ML-CSC model. This work generalizes and improves upon our previous model by deriving a more elegant approach that merges feature extraction and convolutional sparse coding in a unified framework. A segmentation study on a multimodal magnetic resonance imaging (MRI) dataset confirms the effectiveness of the proposed approach. We also provide an interpretability study of the model parameters involved.
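
    A minimal sketch of the ML-CSC idea (an illustration under stated assumptions, not the authors' model): each layer is viewed as sparse coding over a convolutional dictionary, approximated here by a layered soft-thresholding forward pass, with two input channels standing in for two MRI modalities. The helpers, filter shapes and thresholds are hypothetical.

        import numpy as np
        from scipy.signal import correlate

        def soft_threshold(x, t):
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def ml_csc_layer(x, filters, threshold):
            """x: (C, H, W) feature maps; filters: (n_out, C, k, k).
            Returns sparse codes of shape (n_out, H-k+1, W-k+1)."""
            codes = [correlate(x, f, mode="valid")[0] for f in filters]
            return soft_threshold(np.stack(codes), threshold)

        rng = np.random.default_rng(0)
        x = rng.standard_normal((2, 64, 64))      # hypothetical 2-modality MRI slice
        D1 = rng.standard_normal((8, 2, 5, 5))    # layer-1 convolutional dictionary
        D2 = rng.standard_normal((16, 8, 5, 5))   # layer-2 convolutional dictionary

        gamma1 = ml_csc_layer(x, D1, threshold=0.5)       # (8, 60, 60)
        gamma2 = ml_csc_layer(gamma1, D2, threshold=0.5)  # (16, 56, 56)
        print(gamma1.shape, gamma2.shape)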